
The volume of data created and processed globally is ballooning, thanks in no small part to AI, and data handling and privacy legislation is growing in complexity. Combined, these factors make data governance an increasingly critical and urgent requirement. Yet, thanks again to AI, the challenges of data governance have become knottier than ever, and this urgent need is not being met.
Let’s start by defining our terms: Data governance is the framework for managing, protecting, and responsibly using data. It includes the policies and processes that ensure data quality, security, and personal privacy, and it is generally measured by compliance with legislation and regulation.
Now let’s recognize the scale of the AI data challenge: 73% of organizations report having deployed AI tools enterprise-wide, 96% use genAI apps, and 98% have users accessing applications that provide genAI-powered features (such as LinkedIn, Moveworks, or Lattice). And 100% of AI models are built on data, ingesting it and producing it.
Today, half (50%) of organizations lack enforceable data protection policies for genAI apps. This simple insight into one dimension of responsible AI governance shows that, even at a basic level, one in two organizations cannot provably comply with the requirements of important data protection legislation such as GDPR. A reminder for anyone reading this who isn’t regularly kept up at night by this point: Fines for GDPR non-compliance can reach millions (or even billions), and the damage to brand trust can hurt more than the fine itself.
Regulations, and the penalties for non-compliance, provide the impetus to make important updates to data governance processes, with focus required on gaining visibility and control over the ways AI changes how organizations interact with and use data.
Shadowy risks
When AI systems scale, compliance concerns collide with security. More people adopting AI in their day-to-day work increases the risk of sensitive information being shared with public AI models. Customer records, proprietary IP, source code, and more are at risk. In fact, source code exposure is present in nearly 50% of AI-related policy violations, showing how easily assets can be compromised through seemingly minor actions, such as asking an LLM to optimize a piece of code.
Shadow AI (the use of AI tools that are not officially managed by corporate oversight) is exacerbating the problem, leaving security teams with a data visibility gap. Nearly four in ten (39%) employees use free AI tools at work, according to IDC, with another 17% using AI tools they privately pay for, highlighting the prevalence of AI that isn’t centrally managed.
The number of incidents of users sending sensitive data to AI apps has doubled in the past year, according to Netskope’s Cloud and Threat Report, with the average organization seeing 223 incidents per month. Yet the urgency of the challenge is not being matched by the response: 68% of organizations rate their AI governance as only reactive or developing, and a closer look at governance capabilities makes the picture even more stark. Just 7% have advanced governance over AI, with real-time enforcement capabilities.
In fact, a third of organizations describe fragmented adoption, with multiple teams using AI independently and no shared framework, standards, or security policies. Their top regret? 38% wish they’d started governance before adoption scaled. AI is being deployed at scale but governed in draft.
Existing data governance protocols often pre-date generative AI, and gaps emerge if they are not updated. AI platforms often lack transparency around data storage and use, increasing exposure to regulatory and security risks, especially as shadow AI grows.
Where to focus governance efforts
The speed of AI tool adoption has left some organizations falling behind in policy creation and management. 31% of organizations rely on written policies and employee compliance as their primary enforcement method, essentially an honor system, yet research from KPMG found that only half of U.S. workers believe their organizations even have policies for responsible AI use. Clearly, many employees are making their own decisions, either without official guidance or without knowledge of it.
The best place to start is to establish clear acceptable-use policies for AI and data. Set out which data can be used for AI training, which datasets are off-limits, which third-party AI tools are approved and managed, and what approval processes apply.
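A policy of this kind becomes far easier to enforce once it is expressed in machine-readable form. As a minimal sketch, assuming an illustrative schema (the category names, tool identifiers, and is_permitted helper below are hypothetical, not a prescribed standard), an acceptable-use policy might look like this in Python:

```python
# Hypothetical acceptable-use policy expressed as data, so it can be
# checked programmatically instead of living only in a PDF.
POLICY = {
    # Data categories permitted for AI training or prompting
    "allowed_data": {"public", "marketing", "anonymized_telemetry"},
    # Datasets that must never reach an AI tool
    "restricted_data": {"customer_pii", "source_code", "financials"},
    # Third-party AI tools that are centrally managed and approved
    "approved_tools": {"corp-chatgpt", "corp-copilot"},
    # Anything outside the lists above goes through an approval workflow
    "approval_required_by_default": True,
}

def is_permitted(tool: str, data_category: str) -> bool:
    """Return True only for an approved tool handling allowed data."""
    if data_category in POLICY["restricted_data"]:
        return False
    return (tool in POLICY["approved_tools"]
            and data_category in POLICY["allowed_data"])

# Example: pasting customer PII into an unapproved tool is denied.
assert not is_permitted("free-llm.example", "customer_pii")
assert is_permitted("corp-copilot", "marketing")
```

Encoding the policy as data means the same rules can drive automated checks in proxies, DLP tools, or CI pipelines, rather than relying on every employee having read the document.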
Next, focus on gaining greater visibility over where data resides and how it interacts with AI to close compliance gaps.
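As one starting point for that visibility, egress or web proxy logs can reveal which AI services data is flowing to and who is sending it. The sketch below assumes a simplified log format ("user domain bytes_sent") and an illustrative domain watchlist; both are assumptions for demonstration, not a real product integration:

```python
from collections import Counter

# Hypothetical list of genAI-related domains to watch for in egress logs.
GENAI_DOMAINS = {"chat.openai.com", "api.openai.com",
                 "claude.ai", "gemini.google.com"}

def shadow_ai_report(proxy_log_lines):
    """Count requests per user to known genAI domains, surfacing
    unmanaged (shadow) AI use that policy never accounted for."""
    usage = Counter()
    for line in proxy_log_lines:
        user, domain, _bytes_sent = line.split()
        if domain in GENAI_DOMAINS:
            usage[user, domain] += 1
    return usage.most_common()

sample = [
    "alice chat.openai.com 5120",
    "bob claude.ai 2048",
    "alice chat.openai.com 900",
]
print(shadow_ai_report(sample))
# -> [(('alice', 'chat.openai.com'), 2), (('bob', 'claude.ai'), 1)]
```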
Say “yes” to secure innovation with AI
Granular policy enforcement should be the goal, rather than blanket blocking decisions: when security is seen as overly draconian and obstructive, users tend to find a way around it. Focus on controlling what data is shared with AI, helping keep sensitive or regulated data secure.
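To make the contrast concrete, here is a minimal, hypothetical sketch of granular control: instead of blocking an AI app outright, a prompt filter redacts recognizable sensitive patterns before anything leaves the organization. The two regexes and the sanitize_prompt function are simplified illustrations; production DLP engines use far richer detection.

```python
import re

# Illustrative patterns only; real DLP uses much broader coverage.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def sanitize_prompt(prompt: str) -> str:
    """Redact sensitive matches so the prompt can still be sent,
    letting users keep the AI tool without exposing regulated data."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

print(sanitize_prompt(
    "Summarize feedback from jane.doe@example.com, key sk-abc123def456ghi789"))
# -> "Summarize feedback from [REDACTED:email], key [REDACTED:api_key]"
```

This kind of inline, content-aware control is what lets security teams say “yes” to AI tools while still honoring data protection policy.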
Treat AI governance as an integrated extension of existing data protection practices, applying established zero trust principles and maintaining a unified approach to data security. This avoids the duplication and administrative burden of layering in point products to address AI separately.
Data risk in the AI era is multi-faceted, requiring total oversight of the data journey. Netskope discovers and classifies sensitive information across the entire lifecycle—from the ingestion phase for model training, to real-time prompts and responses. We provide deep visibility into shadow AI use and data flows, ensuring your organization only uses the data it needs.
Crucially, we cover both pre- and post-production AI security, providing pre-deployment red teaming with automated adversarial simulations that harden private models against vulnerabilities before they go live. This ensures your intellectual property remains under your absolute control, compliant, and resilient against data exposure and model exploitation.
AI must be secure, but protections must avoid hindering organizational speed and user experience. Netskope One AI Security provides a single solution to govern your AI ecosystem and protect your data, without trade-offs. It secures users and automated agents across public SaaS, private AI tools, and agentic workflows. Combining high performance with context-aware zero trust controls, it enables organizations to move on from AI experimentation and unlock AI advantage.
Download our AI Security Playbook for a practical guide to securing AI end-to-end.
